Synthetic data offers the promise of cheap and bountiful training data for settings where labeled real-world data is scarce or unavailable. However, models trained on synthetic data significantly underperform when evaluated on real-world data. In this paper, we propose Proportional Amplitude Spectrum Training Augmentation (PASTA), a simple and effective augmentation strategy to improve out-of-the-box synthetic-to-real (syn-to-real) generalization performance. PASTA perturbs the amplitude spectra of synthetic images in the Fourier domain to generate augmented views, and it does so in a structured manner such that high-frequency components are perturbed relatively more than low-frequency ones. For the tasks of semantic segmentation (GTAV to Real), object detection (Sim10K to Real), and object recognition (VisDA-C Syn to Real), across a total of 5 syn-to-real shifts, we find that PASTA outperforms more complex state-of-the-art generalization methods while remaining complementary to them.
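The augmentation itself is concrete enough to sketch: leave the phase spectrum untouched and jitter the amplitude spectrum, with stronger perturbations at higher spatial frequencies. Below is a minimal NumPy sketch; the multiplicative Gaussian jitter and the `alpha`/`beta` schedule are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def pasta_like_augment(img, alpha=0.3, beta=1.0, rng=None):
    """Fourier amplitude jitter in the spirit of PASTA (illustrative only).

    img: float array of shape (H, W, C) with values in [0, 1]. The amplitude
    spectrum is multiplied by Gaussian noise whose standard deviation grows
    with normalized radial frequency, so high frequencies are perturbed more
    than low ones; the phase spectrum is left untouched.
    """
    rng = np.random.default_rng() if rng is None else rng
    H, W, C = img.shape

    # Normalized radial frequency in [0, 1] on the centered (fftshifted) grid.
    fy = np.fft.fftshift(np.fft.fftfreq(H))
    fx = np.fft.fftshift(np.fft.fftfreq(W))
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    radius = radius / radius.max()

    # Assumed schedule: jitter strength increases with frequency.
    sigma = alpha * radius ** beta

    out = np.empty_like(img)
    for c in range(C):
        spec = np.fft.fftshift(np.fft.fft2(img[..., c]))
        amp, phase = np.abs(spec), np.angle(spec)
        amp = amp * rng.normal(loc=1.0, scale=sigma)   # proportional amplitude jitter
        recon = np.fft.ifft2(np.fft.ifftshift(amp * np.exp(1j * phase)))
        out[..., c] = np.real(recon)
    return np.clip(out, 0.0, 1.0)
```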
While transformers have begun to dominate vision, applying them to large images remains difficult. A large reason for this is that self-attention scales quadratically with the number of tokens, which in turn grows with image size. On larger images (e.g., 1080p), over 60% of the total computation in the network is spent solely on creating and applying attention matrices. We address this problem by introducing Hydra Attention, an extremely efficient attention operation for Vision Transformers (ViTs). Paradoxically, this efficiency comes from taking multi-head attention to its extreme: by using as many attention heads as there are features, Hydra Attention is computationally linear in both tokens and features with no hidden constants, making it significantly faster than standard self-attention in an off-the-shelf ViT-B/16 by a factor of the token count. Moreover, Hydra Attention retains high accuracy on ImageNet and, in some cases, actually improves it.
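Taken at face value, "one attention head per feature" with a decomposable similarity collapses attention into a single global feature vector that gates each normalized query, which is what makes the cost linear in both tokens and features. A small PyTorch sketch under that reading (the cosine-style normalization is an assumption):

```python
import torch
import torch.nn.functional as F

def hydra_attention(q, k, v):
    """Hydra-style attention sketch: as many heads as features (illustrative).

    q, k, v: tensors of shape (batch, tokens, dim). Instead of forming a
    tokens x tokens attention matrix, keys and values are aggregated into one
    global feature vector, which then gates the normalized queries, so the
    cost is linear in both the token count and the feature dimension.
    """
    q = F.normalize(q, dim=-1)                        # cosine-similarity kernel
    k = F.normalize(k, dim=-1)
    global_feat = (k * v).sum(dim=1, keepdim=True)    # (batch, 1, dim)
    return q * global_feat                            # broadcast over tokens

# Shape check on ViT-B/16-sized inputs (196 tokens, 768 features).
q = torch.randn(2, 196, 768)
out = hydra_attention(q, torch.randn_like(q), torch.randn_like(q))
assert out.shape == (2, 196, 768)
```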
We present a scalable approach for learning open-world object-goal navigation (ObjectNav) -- the task of asking a virtual robot (agent) to find any instance of an object in an unexplored environment (e.g., "find a sink"). Our approach is entirely zero-shot -- i.e., it does not require ObjectNav rewards or demonstrations of any kind. Instead, we train on the image-goal navigation (ImageNav) task, in which the agent finds the location where a picture (i.e., the goal image) was captured. Specifically, we encode goal images into a multimodal, semantic embedding space, which enables training semantic-goal navigation (SemanticNav) agents at scale in unannotated 3D environments (e.g., HM3D). After training, SemanticNav agents can be instructed to find objects described in free-form natural language (e.g., "sink", "bathroom sink", etc.) by projecting the language goals into the same multimodal, semantic embedding space. As a result, our approach enables open-world ObjectNav. We extensively evaluate our agents on three ObjectNav datasets (Gibson, HM3D, and MP3D) and observe absolute improvements in success of 4.2%-20.0%. For reference, these gains are similar to or better than the 5% improvement in success between competitors in the 2020 and 2021 ObjectNav challenges. In an open-world setting, we find that our agents can generalize to compound instructions with a room explicitly mentioned (e.g., "find a kitchen sink") and to cases where the target room can be inferred (e.g., "find a sink and a stove").
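The key mechanism is that image goals (at training time) and language goals (at test time) are projected into the same multimodal embedding space, so the navigation policy never needs to know which kind of goal it was given. The abstract does not name the encoder; the sketch below assumes OpenAI's CLIP purely for illustration, and `policy` is a hypothetical stand-in for the trained SemanticNav agent.

```python
# Hypothetical sketch of goal encoding; CLIP is an assumption, not stated above.
import clip
import torch
from PIL import Image

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("ViT-B/32", device=device)

@torch.no_grad()
def image_goal_embedding(goal_image_path):
    """Training-time goal: embed the picture captured at the goal location."""
    image = preprocess(Image.open(goal_image_path)).unsqueeze(0).to(device)
    return model.encode_image(image)                  # (1, 512)

@torch.no_grad()
def language_goal_embedding(description):
    """Test-time goal: embed a free-form object description instead."""
    tokens = clip.tokenize([description]).to(device)
    return model.encode_text(tokens)                  # (1, 512)

# The policy only ever sees the embedding, so either goal type plugs in:
#   policy(observation, image_goal_embedding("goal_view.png"))        # training
#   policy(observation, language_goal_embedding("a bathroom sink"))   # zero-shot
```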
Visual domain adaptation (DA) seeks to transfer trained models to unseen, unlabeled domains across distribution shift, but approaches typically focus on adapting convolutional neural network architectures initialized with supervised ImageNet representations. In this work, we shift focus to adapting modern architectures for object recognition -- the increasingly popular Vision Transformer (ViT) -- and modern pretraining based on self-supervised learning (SSL). Inspired by recent SSL approaches that learn from partial image inputs generated via masking or cropping -- either by learning to predict the missing pixels, or by learning representational invariances to such augmentations -- we propose PACMAC, a simple two-stage adaptation algorithm for self-supervised ViTs. PACMAC first performs in-domain SSL on pooled source and target data to learn task-discriminative features, and then probes the model's predictive consistency over a set of partial target inputs generated via a novel attention-conditioned masking strategy to identify reliable candidates for self-training. Our simple approach leads to consistent performance gains over competing methods that use both ViTs and self-supervised initializations on standard object recognition benchmarks. Code is available at https://github.com/virajprabhu/pacmac
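The second stage (selecting reliable self-training candidates by probing predictive consistency over partial views) can be sketched as follows. The `masker` below is a stand-in for the paper's attention-conditioned masking strategy, and the number of views and confidence threshold are assumed values, not taken from the paper.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def select_self_training_candidates(model, images, masker, num_views=3, conf_thresh=0.5):
    """Consistency-based candidate selection (illustrative; details assumed).

    An unlabeled target image is kept for self-training only if the model's
    predicted class on several partially masked views agrees with its
    prediction on the full image, and that prediction is confident.
    """
    probs = F.softmax(model(images), dim=-1)          # (B, num_classes)
    conf, pseudo_labels = probs.max(dim=-1)

    agree = torch.ones_like(pseudo_labels, dtype=torch.bool)
    for _ in range(num_views):
        masked = masker(images)                       # partial (masked) target view
        agree &= model(masked).argmax(dim=-1).eq(pseudo_labels)

    keep = agree & (conf > conf_thresh)
    return images[keep], pseudo_labels[keep]          # feed into self-training
```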
With the preponderance of pretrained deep learning models available off-the-shelf from model banks today, finding the best weights to fine-tune for your use case can be a daunting task. Several methods have recently been proposed to find good models for transfer learning, but they either don't scale well to large model banks or don't perform well on the diversity of off-the-shelf models. Ideally, the question we want to answer is: "given some data and a source model, can you quickly predict the model's accuracy after fine-tuning?" In this paper, we formalize this setting as "Scalable Diverse Model Selection" and propose several benchmarks for evaluating this task. We find that existing model selection and transferability estimation methods perform poorly here, and we analyze why this is the case. We then introduce simple techniques to improve the performance and speed of these algorithms. Finally, we iterate on existing methods to create PARC, which outperforms all other methods on diverse model selection. We have released the benchmarks and method code in the hope of inspiring future work on model selection for accessible transfer learning.
Most modern approaches for domain adaptive semantic segmentation rely on continued access to source data during adaptation, which may be infeasible due to computational or privacy constraints. We focus on source-free domain adaptation for semantic segmentation, in which a source model must adapt itself to a new target domain given only unlabeled target data. We propose Augmentation Consistency-guided Self-training (AUGCO), a source-free adaptation algorithm that uses the model's pixel-level predictive consistency across diverse, automatically generated views of each target image, together with model confidence, to identify reliable pixel predictions and selectively self-trains on them. AUGCO achieves state-of-the-art results on three standard benchmarks for source-free semantic segmentation, while being simple to implement and fast to converge.
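The selection rule described above (pixel-level consistency across generated views plus model confidence) can be sketched as follows; the view/inverse-transform interface and the confidence threshold are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def reliable_pixel_mask(model, image, views, conf_thresh=0.9):
    """Pixel selection for self-training (illustrative; details assumed).

    `views` is a list of (augment_fn, invert_fn) pairs: augment_fn produces a
    view of the target image (e.g., a crop or flip) and invert_fn maps the
    resulting prediction back onto the original pixel grid. A pixel is kept
    only if its predicted class agrees across all views and the model is
    confident on the original image.
    """
    base = F.softmax(model(image), dim=1)             # (1, num_classes, H, W)
    conf, pseudo = base.max(dim=1)                    # both (1, H, W)

    consistent = torch.ones_like(pseudo, dtype=torch.bool)
    for augment_fn, invert_fn in views:
        view_pred = model(augment_fn(image)).argmax(dim=1)
        consistent &= invert_fn(view_pred).eq(pseudo)

    mask = consistent & (conf > conf_thresh)          # reliable pixels
    return pseudo, mask   # self-train with cross-entropy only on masked pixels
```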
Domain adaptation is critical for success in new, unseen environments. Adversarial adaptation models applied in feature spaces discover domain invariant representations, but are difficult to visualize and sometimes fail to capture pixel-level and low-level domain shifts. Recent work has shown that generative adversarial networks combined with cycle-consistency constraints are surprisingly effective at mapping images between domains, even without the use of aligned image pairs. We propose a novel discriminatively-trained Cycle-Consistent Adversarial Domain Adaptation model. CyCADA adapts representations at both the pixel level and the feature level, enforces cycle-consistency while leveraging a task loss, and does not require aligned pairs. Our model can be applied in a variety of visual recognition and prediction settings. We show new state-of-the-art results across multiple adaptation tasks, including digit classification and semantic segmentation of road scenes, demonstrating transfer from synthetic to real-world domains.
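Read as a single objective, the ingredients above (a task loss carried onto translated source images, pixel-level and feature-level adversarial losses, and a cycle-consistency term) compose roughly as follows; this is a schematic grouping based on the abstract, not the paper's exact notation:

\[
\mathcal{L}_{\mathrm{CyCADA}} \;=\; \mathcal{L}_{\mathrm{task}}\big(f_T,\, G_{S\to T}(X_S),\, Y_S\big)
\;+\; \mathcal{L}_{\mathrm{GAN}}^{\mathrm{pixel}}\big(G_{S\to T},\, D_{\mathrm{img}}\big)
\;+\; \mathcal{L}_{\mathrm{GAN}}^{\mathrm{feat}}\big(f_T,\, D_{\mathrm{feat}}\big)
\;+\; \mathcal{L}_{\mathrm{cyc}}\big(G_{S\to T},\, G_{T\to S}\big)
\]

Here \(f_T\) is the target task model, \(G_{S\to T}\) and \(G_{T\to S}\) are the image translators, \(D_{\mathrm{img}}\) and \(D_{\mathrm{feat}}\) are the pixel-level and feature-level discriminators, and \(X_S, Y_S\) are source images and labels.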
Adversarial learning methods are a promising approach to training robust deep networks, and can generate complex samples across diverse domains. They also can improve recognition despite the presence of domain shift or dataset bias: several adversarial approaches to unsupervised domain adaptation have recently been introduced, which reduce the difference between the training and test domain distributions and thus improve generalization performance. Prior generative approaches show compelling visualizations, but are not optimal on discriminative tasks and can be limited to smaller shifts. Prior discriminative approaches could handle larger domain shifts, but imposed tied weights on the model and did not exploit a GAN-based loss. We first outline a novel generalized framework for adversarial adaptation, which subsumes recent state-of-the-art approaches as special cases, and we use this generalized view to better relate the prior approaches. We propose a previously unexplored instance of our general framework which combines discriminative modeling, untied weight sharing, and a GAN loss, which we call Adversarial Discriminative Domain Adaptation (ADDA). We show that ADDA is more effective yet considerably simpler than competing domain-adversarial methods, and demonstrate the promise of our approach by exceeding state-of-the-art unsupervised adaptation results on standard cross-domain digit classification tasks and a new more difficult cross-modality object classification task.
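The combination named in the abstract (discriminative modeling, untied weights, and a GAN loss) amounts to freezing the source encoder and training a separate target encoder against a domain discriminator with standard inverted-label GAN updates. A minimal PyTorch sketch under those assumptions (module and optimizer names are placeholders):

```python
import torch
import torch.nn as nn

def adda_adaptation_step(source_enc, target_enc, discriminator,
                         opt_target, opt_disc, xs, xt):
    """One adversarial adaptation step in the spirit of ADDA (illustrative).

    The source encoder stays frozen; the target encoder (untied weights,
    typically initialized from the source encoder) is trained so that a domain
    discriminator cannot tell its features from source features, using a
    standard GAN loss rather than a gradient-reversal domain-confusion loss.
    """
    bce = nn.BCEWithLogitsLoss()
    with torch.no_grad():
        fs = source_enc(xs)                           # frozen source features
    ft = target_enc(xt)                               # trainable target features

    # 1) Discriminator update: label source as 1, target as 0.
    ds, dt = discriminator(fs), discriminator(ft.detach())
    d_loss = bce(ds, torch.ones_like(ds)) + bce(dt, torch.zeros_like(dt))
    opt_disc.zero_grad(); d_loss.backward(); opt_disc.step()

    # 2) Target encoder update: fool the discriminator (inverted labels).
    dg = discriminator(ft)
    g_loss = bce(dg, torch.ones_like(dg))
    opt_target.zero_grad(); g_loss.backward(); opt_target.step()
    return d_loss.item(), g_loss.item()

# At test time, target images are classified with target_enc + the source classifier.
```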
Recent reports suggest that a generic supervised deep CNN model trained on a large-scale dataset reduces, but does not remove, dataset bias. Fine-tuning deep models in a new domain can require a significant amount of labeled data, which for many applications is simply not available. We propose a new CNN architecture to exploit unlabeled and sparsely labeled target domain data. Our approach simultaneously optimizes for domain invariance to facilitate domain transfer and uses a soft label distribution matching loss to transfer information between tasks. Our proposed adaptation method offers empirical performance which exceeds previously published results on two standard benchmark visual domain adaptation tasks, evaluated across supervised and semi-supervised adaptation settings.
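The soft label distribution matching loss mentioned above is concrete enough to sketch: per-class "soft labels" are averaged model outputs over the source examples of each class, and labeled target examples are pushed toward the soft label of their class, carrying inter-class structure across domains. The temperature and averaging scheme below are assumptions based on that description, not verified details.

```python
import torch
import torch.nn.functional as F

def class_soft_labels(source_logits, source_labels, num_classes, temperature=2.0):
    """Per-class soft labels: mean tempered softmax over each class's source
    examples (assumes every class appears at least once in source_logits)."""
    probs = F.softmax(source_logits / temperature, dim=1)
    soft = torch.stack([probs[source_labels == c].mean(dim=0)
                        for c in range(num_classes)])
    return soft                                        # (num_classes, num_classes)

def soft_label_matching_loss(target_logits, target_labels, soft_labels, temperature=2.0):
    """Cross-entropy between the model's tempered output on a labeled target
    example and the soft label distribution of that example's class."""
    log_probs = F.log_softmax(target_logits / temperature, dim=1)
    return -(soft_labels[target_labels] * log_probs).sum(dim=1).mean()
```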
We evaluate whether features extracted from the activations of a deep convolutional network trained in a fully supervised fashion on a large, fixed set of object recognition tasks can be repurposed to novel generic tasks. Our generic tasks may differ significantly from the originally trained tasks, and there may be insufficient labeled or unlabeled data to conventionally train or adapt a deep architecture to the new tasks. We investigate and visualize the semantic clustering of deep convolutional features with respect to a variety of such tasks, including scene recognition, domain adaptation, and fine-grained recognition challenges. We compare the efficacy of relying on various network levels to define a fixed feature, and report novel results that significantly outperform the state-of-the-art on several important vision challenges. We are releasing DeCAF, an open-source implementation of these deep convolutional activation features, along with all associated network parameters, to enable vision researchers to conduct experimentation with deep representations across a range of visual concept learning paradigms.
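The recipe is simple enough to reproduce in a few lines: freeze a network pretrained on a large supervised task, read activations from a late layer, and train a simple classifier on the new task. The sketch below uses torchvision's AlexNet as a modern stand-in; the original DeCAF features came from a Caffe-era implementation, so treat this purely as an illustration of the idea.

```python
import torch
from torchvision import models

# Frozen ImageNet-pretrained backbone used as a fixed feature extractor.
backbone = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
backbone.eval()

@torch.no_grad()
def decaf_like_features(images):
    """Return activations from the penultimate fully connected layer."""
    x = backbone.features(images)
    x = backbone.avgpool(x)
    x = torch.flatten(x, 1)
    x = backbone.classifier[:-1](x)   # stop before the final 1000-way layer
    return x                          # (batch, 4096) fixed feature vectors

feats = decaf_like_features(torch.randn(4, 3, 224, 224))
# Train any simple classifier (linear SVM, logistic regression) on `feats`
# for the new task; the backbone itself is never fine-tuned.
```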